Artificial Intelligence Risk
Artificial Intelligence Risks: Training and Education
Training and education are imperative in many facets of healthcare -- from understanding clinical systems, to improving technical skills, to understanding regulations and professional standards. Technology often presents unique training challenges because of the ways in which it disrupts existing workflow patterns, alters clinical practice, and creates both predictable and unforeseen challenges. The emergence of artificial intelligence (AI), its anticipated expansion in healthcare, and its sheer scope point to significant training and educational needs for medical students and practicing healthcare providers. These needs go far beyond developing technical skills with AI programs and systems; rather, they call for a shift in the paradigm of medical learning. An AMA Journal of Ethics article titled "Reimagining Medical Education in the Age of AI" discusses how traditional medical education -- which focuses on information acquisition, retention, and application -- is insufficient, counterproductive, and potentially harmful in the era of digital medicine.
- Health & Medicine (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
Artificial Intelligence Risks: Patient Expectations
At the heart of many healthcare innovations are patients and the search for ways to improve the quality of their care and experience. This is perhaps nowhere more true than in the case of artificial intelligence (AI), which offers vast potential for improving patient outcomes through advances in population health management, risk identification and stratification, diagnosis, and treatment. Yet even with this promise, questions arise about how patients will interact with and react to these new technologies, as well as how these advances will change the provider–patient relationship. A look at other technologies reveals some insights and possible concerns. Electronic health records, for example, have been known to produce issues with communication.
Risks of using artificial intelligence to grow food are substantial - CIO News
Artificial intelligence (AI) is on the cusp of driving an agricultural revolution, and helping confront the challenge of feeding our growing global population in a sustainable way. But researchers warn that using new artificial intelligence technologies at scale holds huge risks that are not being considered. Imagine a field of wheat that extends to the horizon, being grown for flour that will be made into bread to feed cities' worth of people. Imagine that all authority for tilling, planting, fertilizing, monitoring, and harvesting this field has been delegated to artificial intelligence: algorithms that control drip-irrigation systems, self-driving tractors, and combine harvesters, clever enough to respond to the weather and the exact needs of the crop. Then imagine a hacker messes things up. A new risk analysis, published recently in the journal Nature Machine Intelligence, warns that the future use of artificial intelligence in agriculture comes with substantial potential risks for farms, farmers, and food security that are poorly understood and under-appreciated.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Asia > India > Arunachal Pradesh (0.05)
- Food & Agriculture > Agriculture (1.00)
- Materials > Chemicals > Agricultural Chemicals (0.32)
Urgent action needed over artificial intelligence risks to human rights
Urgent action is needed as it can take time to assess and address the serious risks this technology poses to human rights, warned the High Commissioner: "The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be". Ms. Bachelet also called for AI applications that cannot be used in compliance with international human rights law, to be banned. "Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights". On Tuesday, the UN rights chief expressed concern about the "unprecedented level of surveillance across the globe by state and private actors", which she insisted was "incompatible" with human rights.
Trust in EU approach to artificial intelligence risks being undermined by new AI rules
The EU is winning the battle for trust among artificial intelligence (AI) researchers, academics on both sides of the Atlantic say, bolstering the Commission's ambitions to set global standards for the technology. But some fear the EU risks squandering this confidence by imposing ill-thought-through rules in its recently proposed Artificial Intelligence Act, which some academics say are at odds with the realities of AI research. "We do see a push for trustworthy and transparent AI also in the US, but, in terms of governance, we are not as far [ahead] as the EU in this regard," said Bart Selman, president of the Association for the Advancement of Artificial Intelligence (AAAI) and a professor at Cornell University. Highly international AI researchers are "aware that AI developments in the US are dominated by business interests, and in China by the government interest," said Holger Hoos, professor of machine learning at Leiden University and a founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE). EU policymaking, though slower, incorporated "more voices, and more perspectives" than the more centralised processes in the US and China, he argued, with the EU having taken strong action on privacy through the General Data Protection Regulation, which came into effect in 2018.
- Asia > China (0.58)
- Europe > Netherlands > South Holland > Leiden (0.25)
- South America > Brazil (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.49)
'White' artificial intelligence risks exacerbating racial inequality, study suggests
The "whiteness" of artificial intelligence (AI) risks a "racially homogenous" workforce as humans create machines skewed by their biases, a study suggests. The University of Cambridge study examined AI in society, including in films, Google searches, stock images and robot voices. Researchers suggested machines have distinct racial identities and that this perpetuates "real world" racial stereotypes. Non-abstract AI in internet search engine results usually either had Caucasian features or was the colour white, according to the researchers. Most virtual voices in devices spoke in "standard white middle-class English", as "ideas of adding black dialects have been dismissed as too controversial or outside the target market," the study concluded.
- Information Technology (0.38)
- Media (0.34)
- Leisure & Entertainment (0.34)
Artificial Intelligence Risks: Black-Box Reasoning
Artificial intelligence (AI) systems and programs use data analytics and algorithms to perform functions that typically would require human intelligence and reasoning. Some types of AI are programmed to follow specific rules and logic to produce targeted outputs. In these cases, individuals can understand the reasoning behind a system's conclusions or recommendations by examining its programming and coding. However, many of today's cutting-edge AI technologies -- particularly machine learning systems that offer great promise for transforming healthcare -- have more opaque reasoning, making it difficult or impossible to determine how they produce results. This unknown functioning is referred to as "black-box reasoning" or "black-box decision-making."
- Transportation > Air (0.87)
- Health & Medicine > Therapeutic Area > Oncology (0.31)
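The contrast the excerpt above draws can be sketched in miniature. The following Python sketch is purely illustrative: the function names, clinical thresholds, and weights are hypothetical, not drawn from any real system. A rule-based check is auditable by reading its code, whereas a model driven by learned weights yields a score whose individual parameters carry no human-readable clinical meaning — the "black box" in small scale.

```python
# Hypothetical illustration of transparent vs. opaque AI reasoning.
# All thresholds and weights below are made up for the example.

def rule_based_flag(heart_rate, systolic_bp):
    """Transparent: the rule itself explains the output."""
    return heart_rate > 120 or systolic_bp < 90

def opaque_flag(features, weights, bias):
    """A 'black box' in miniature: learned weights combine into a score,
    but no single weight maps to a clinically meaningful rule."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > 0

# The rule-based flag can be audited by reading one line of code;
# the weighted model can only be probed by observing inputs and outputs.
print(rule_based_flag(130, 100))                        # explainable: 130 > 120
print(opaque_flag([130, 100], [0.02, -0.05], 1.0))      # why? inspect the score
```

Real machine learning systems compound this opacity across thousands or millions of parameters, which is why examining the code, as one can with rule-based systems, no longer reveals the reasoning.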
10 Questions to Consider About Artificial Intelligence Risks - Enablon
"Is artificial intelligence less than our intelligence?" Every month, we publish three roundups on Enablon Insights: the EHS Roundup, the Sustainability Roundup, and the Risk Roundup. Each roundup highlights ten articles or online resources that caught our attention and deserve a second look. Sometimes, one of the articles is so interesting that we write a separate post to highlight it. This is such a post.
Artificial Intelligence risks may outweigh benefits, reports Allianz - Reinsurance News
In a new report, Allianz Global Corporate & Specialty (AGCS), a division of global insurer Allianz, has claimed that the advantages offered by increasingly integrated Artificial Intelligence (AI) applications in the re/insurance industry may be outweighed by the potential threats they bring. AGCS claims that increased vulnerability to malicious cyber-attacks and technical failure, as well as the potential for larger-scale disruptions and extraordinary financial losses, pose significant risks to re/insurers as AI becomes more widely adopted in the industry, and as societies and economies become more interconnected. Additionally, insurers and reinsurers will have to contend with new liability scenarios as decision-making responsibilities shift from humans to machines and manufacturers. However, AGCS still maintains that the growing reliance on AI applications like chatbots, autonomous vehicles, and connected machines in digital factories offers many advantages for re/insurers in terms of increased efficiencies, fewer repetitive tasks, and better customer experiences. "There is huge potential for AI to improve the insurance value chain. Initially, it will help automate insurance processes to enable better delivery to our customers. Policies can be issued, and claims processed, faster and more efficiently," said Michael Bruch, Head of Emerging Trends at AGCS.